24. Logistic Regression Algorithm
Coding the Logistic Regression Algorithm
Now, let's get our hands dirty and code the logistic regression algorithm. It'll be easier than you think, given that the gradient of the error function has such a beautiful formula.
Below, you'll be given the code for the functions that calculate the sigmoid, the derivative of the sigmoid, the errors, and the prediction. You'll be asked to write the code for the functions dErrors and gradientDescentStep, which do the following:

dErrors: This function should receive X, y, and the predictions y_hat, and return the coordinates of the gradient as lists, given by the formula (y - \hat{y})(x_1, \ldots, x_n) for the weights and (y - \hat{y}) for the bias.

gradientDescentStep: In this function, you receive X, y, W, and b, and you need to update the weights and the bias by stepping against the gradient of the error. Since the lists returned by dErrors are the negative of that gradient, this amounts to adding their summed coordinates, scaled by the learning rate; the full update is written out below.
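To make the update direction concrete, here is the step gradientDescentStep is meant to perform, written out as a sketch of the batch form used in this quiz (the contributions of all points j are summed in each epoch, and \alpha stands for the learning rate learn_rate):

\[
w_i \leftarrow w_i + \alpha \sum_j \left( y^{(j)} - \hat{y}^{(j)} \right) x_i^{(j)}, \qquad
b \leftarrow b + \alpha \sum_j \left( y^{(j)} - \hat{y}^{(j)} \right), \qquad i = 1, 2.
\]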
If you get stuck, feel free to look at solution.py. But give it a try first!
Then, as before, click on test run to graph the solution that the logistic regression algorithm gives you. It'll actually draw a set of dotted lines that show how the algorithm approaches the best solution, given by the black solid line. This will also show a plot of the error, and you can see how it decreases as we get closer and closer to an optimal solution.
Feel free to play with the parameters of the algorithm (the number of epochs, the learning rate, and even the random initialization of the weights and bias) to see how your initial conditions can affect the solution!
Start Quiz:
import numpy as np

# Setting the random seed; feel free to change it and see different solutions.
np.random.seed(42)

def sigmoid(x):
    return 1/(1+np.exp(-x))

def sigmoid_prime(x):
    return sigmoid(x)*(1-sigmoid(x))

def prediction(X, W, b):
    return sigmoid(np.matmul(X,W)+b)

# Cross-entropy error of each point: -y*log(y_hat) - (1-y)*log(1-y_hat)
def error_vector(y, y_hat):
    return [-y[i]*np.log(y_hat[i]) - (1-y[i])*np.log(1-y_hat[i]) for i in range(len(y))]

# Mean cross-entropy error over the whole dataset
def error(y, y_hat):
    ev = error_vector(y, y_hat)
    return sum(ev)/len(ev)
# TODO: Fill in the code below to calculate the gradient of the error function.
# The result should be a list of three lists:
# The first list should contain the gradient (partial derivatives) with respect to w1
# The second list should contain the gradient (partial derivatives) with respect to w2
# The third list should contain the gradient (partial derivatives) with respect to b
def dErrors(X, y, y_hat):
    DErrorsDx1 = # Fill in
    DErrorsDx2 = # Fill in
    DErrorsDb = # Fill in
    return DErrorsDx1, DErrorsDx2, DErrorsDb

# TODO: Fill in the code below to implement the gradient descent step.
# The function should receive as inputs the data X, the labels y,
# the weights W (as an array), and the bias b.
# It should calculate the prediction, the gradients, and use them to
# update the weights and bias W, b. Then return W and b.
# The error e will be calculated and returned for you, for plotting purposes.
def gradientDescentStep(X, y, W, b, learn_rate = 0.01):
    # TODO: Calculate the prediction
    # TODO: Calculate the gradient
    # TODO: Update the weights
    # This calculates the error
    e = error(y, y_hat)
    return W, b, e
# This function runs gradient descent for logistic regression repeatedly
# on the dataset, and returns a few of the boundary lines obtained in the
# iterations, for plotting purposes.
# Feel free to play with the learning rate and the num_epochs,
# and see your results plotted below.
def trainLR(X, y, learn_rate = 0.01, num_epochs = 100):
    x_min, x_max = min(X.T[0]), max(X.T[0])
    y_min, y_max = min(X.T[1]), max(X.T[1])
    # Initialize the weights randomly
    W = np.array(np.random.rand(2,1))*2 - 1
    b = np.random.rand(1)[0]*2 - 1
    # These are the solution lines that get plotted below.
    boundary_lines = []
    errors = []
    for i in range(num_epochs):
        # In each epoch, we apply the gradient descent step.
        W, b, error = gradientDescentStep(X, y, W, b, learn_rate)
        # Store the current boundary as (slope, intercept) of the line x2 = slope*x1 + intercept.
        boundary_lines.append((-W[0]/W[1], -b/W[1]))
        errors.append(error)
    return boundary_lines, errors
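A note on the boundary_lines stored by trainLR: the decision boundary is the set of points where the score w_1 x_1 + w_2 x_2 + b is zero (that is, where the sigmoid outputs 0.5). Solving for x_2 gives the (slope, intercept) pair appended in each epoch:

\[
w_1 x_1 + w_2 x_2 + b = 0 \quad \Longrightarrow \quad x_2 = -\frac{w_1}{w_2}\, x_1 - \frac{b}{w_2},
\]

which is exactly the tuple (-W[0]/W[1], -b/W[1]) in the code above.

The dataset used by the quiz follows, one point per line with columns x1, x2, label: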
0.78051,-0.063669,1
0.28774,0.29139,1
0.40714,0.17878,1
0.2923,0.4217,1
0.50922,0.35256,1
0.27785,0.10802,1
0.27527,0.33223,1
0.43999,0.31245,1
0.33557,0.42984,1
0.23448,0.24986,1
0.0084492,0.13658,1
0.12419,0.33595,1
0.25644,0.42624,1
0.4591,0.40426,1
0.44547,0.45117,1
0.42218,0.20118,1
0.49563,0.21445,1
0.30848,0.24306,1
0.39707,0.44438,1
0.32945,0.39217,1
0.40739,0.40271,1
0.3106,0.50702,1
0.49638,0.45384,1
0.10073,0.32053,1
0.69907,0.37307,1
0.29767,0.69648,1
0.15099,0.57341,1
0.16427,0.27759,1
0.33259,0.055964,1
0.53741,0.28637,1
0.19503,0.36879,1
0.40278,0.035148,1
0.21296,0.55169,1
0.48447,0.56991,1
0.25476,0.34596,1
0.21726,0.28641,1
0.67078,0.46538,1
0.3815,0.4622,1
0.53838,0.32774,1
0.4849,0.26071,1
0.37095,0.38809,1
0.54527,0.63911,1
0.32149,0.12007,1
0.42216,0.61666,1
0.10194,0.060408,1
0.15254,0.2168,1
0.45558,0.43769,1
0.28488,0.52142,1
0.27633,0.21264,1
0.39748,0.31902,1
0.5533,1,0
0.44274,0.59205,0
0.85176,0.6612,0
0.60436,0.86605,0
0.68243,0.48301,0
1,0.76815,0
0.72989,0.8107,0
0.67377,0.77975,0
0.78761,0.58177,0
0.71442,0.7668,0
0.49379,0.54226,0
0.78974,0.74233,0
0.67905,0.60921,0
0.6642,0.72519,0
0.79396,0.56789,0
0.70758,0.76022,0
0.59421,0.61857,0
0.49364,0.56224,0
0.77707,0.35025,0
0.79785,0.76921,0
0.70876,0.96764,0
0.69176,0.60865,0
0.66408,0.92075,0
0.65973,0.66666,0
0.64574,0.56845,0
0.89639,0.7085,0
0.85476,0.63167,0
0.62091,0.80424,0
0.79057,0.56108,0
0.58935,0.71582,0
0.56846,0.7406,0
0.65912,0.71548,0
0.70938,0.74041,0
0.59154,0.62927,0
0.45829,0.4641,0
0.79982,0.74847,0
0.60974,0.54757,0
0.68127,0.86985,0
0.76694,0.64736,0
0.69048,0.83058,0
0.68122,0.96541,0
0.73229,0.64245,0
0.76145,0.60138,0
0.58985,0.86955,0
0.73145,0.74516,0
0.77029,0.7014,0
0.73156,0.71782,0
0.44556,0.57991,0
0.85275,0.85987,0
0.51912,0.62359,0
solution.py:
def dErrors(X, y, y_hat):
    # Each list holds one coordinate of the (negative) gradient per point:
    # (y - y_hat)*x1, (y - y_hat)*x2, and (y - y_hat) for the bias.
    DErrorsDx1 = [X[i][0]*(y[i]-y_hat[i]) for i in range(len(y))]
    DErrorsDx2 = [X[i][1]*(y[i]-y_hat[i]) for i in range(len(y))]
    DErrorsDb = [y[i]-y_hat[i] for i in range(len(y))]
    return DErrorsDx1, DErrorsDx2, DErrorsDb

def gradientDescentStep(X, y, W, b, learn_rate = 0.01):
    y_hat = prediction(X, W, b)
    errors = error_vector(y, y_hat)
    derivErrors = dErrors(X, y, y_hat)
    # dErrors returns the negative gradient of the error, so adding it
    # (scaled by the learning rate) moves downhill on the error surface.
    W[0] += sum(derivErrors[0])*learn_rate
    W[1] += sum(derivErrors[1])*learn_rate
    b += sum(derivErrors[2])*learn_rate
    return W, b, sum(errors)
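For comparison only (this is not part of the quiz or of solution.py), here is a sketch of the same gradient descent step written with vectorized NumPy, in which the per-point sums computed by dErrors collapse into a single matrix product. The file name data.csv in the usage lines is an assumption about how the dataset above is stored, and the function name gradient_descent_step_vectorized is purely illustrative.

import numpy as np

np.random.seed(42)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def prediction(X, W, b):
    return sigmoid(np.matmul(X, W) + b)

def gradient_descent_step_vectorized(X, y, W, b, learn_rate=0.01):
    # X has shape (n, 2), y has shape (n,), W has shape (2, 1), b is a scalar.
    y_hat = prediction(X, W, b)                  # predictions, shape (n, 1)
    diff = y.reshape(-1, 1) - y_hat              # (y - y_hat) for every point
    W = W + learn_rate * np.matmul(X.T, diff)    # sums (y - y_hat)*x_i over all points
    b = b + learn_rate * np.sum(diff)
    # Mean cross-entropy error, returned for plotting/monitoring.
    e = -np.mean(y.reshape(-1, 1) * np.log(y_hat)
                 + (1 - y.reshape(-1, 1)) * np.log(1 - y_hat))
    return W, b, e

# Hypothetical usage, assuming the data above is saved as 'data.csv'
# with columns x1, x2, label.
data = np.loadtxt('data.csv', delimiter=',')
X, y = data[:, :2], data[:, 2]
W = np.random.rand(2, 1) * 2 - 1
b = np.random.rand(1)[0] * 2 - 1
for epoch in range(100):
    W, b, e = gradient_descent_step_vectorized(X, y, W, b)
print("final error:", e)

This step matches the list-comprehension solution numerically; the only differences are that the sums are done by matrix multiplication and that the mean error is returned instead of the sum, which changes nothing but the scale of the error plot.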